17 research outputs found

    A HUMAN-CENTERED APPROACH TO IMPROVING THE USER EXPERIENCE OF SOFTWARE UPDATES

    Software updates are critical to the security of software systems and devices, yet users often do not install them in a timely manner, leaving their devices open to security exploits. This research explored a redesign of automatic software updates on desktop and mobile devices to improve the uptake of updates through three studies. First, in a formative interview study, we examined users' updating patterns and behaviors on desktop machines. Second, we distilled these findings into the design of a low-fidelity desktop prototype and evaluated its efficacy for automating updates by means of a think-aloud study. Third, we investigated individual differences in update automation on Android devices using a large-scale survey and interviews. In this thesis, I present the findings of all three studies and provide evidence for how automatic updates can be better tailored to users on both desktops and mobile devices. Additionally, I provide user interface design suggestions for software updates and outline recommendations for future work to improve the user experience of software updates.

    You, Me, and IoT: How Internet-Connected Consumer Devices Affect Interpersonal Relationships

    Internet-connected consumer devices have rapidly increased in popularity; however, relatively little is known about how these technologies are affecting interpersonal relationships in multi-occupant households. In this study, we conduct 13 semi-structured interviews and survey 508 individuals from a variety of backgrounds to discover and categorize how consumer IoT devices are affecting interpersonal relationships in the United States. We highlight several themes, providing large-scale exploratory data about the pervasiveness of interpersonal costs and benefits of consumer IoT devices. These results also inform follow-up studies and design priorities for future IoT technologies to amplify positive and reduce negative interpersonal effects. (26 pages, 5 figures, 5 tables. Updated version with additional examples and minor revisions. Original title: "You, Me, and IoT: How Internet-Connected Home Devices Affect Interpersonal Relationships.")

    Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites

    Dark patterns are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions. We present automated techniques that enable experts to identify dark patterns on a large set of websites. Using these techniques, we study shopping websites, which often use dark patterns to influence users into making more purchases or disclosing more information than they would otherwise. Analyzing ~53K product pages from ~11K shopping websites, we discover 1,818 dark pattern instances, together representing 15 types and 7 broader categories. We examine these dark patterns for deceptive practices, and find 183 websites that engage in such practices. We also uncover 22 third-party entities that offer dark patterns as a turnkey solution. Finally, we develop a taxonomy of dark pattern characteristics that describes the underlying influence of the dark patterns and their potential harm on user decision-making. Based on our findings, we make recommendations for stakeholders including researchers and regulators to study, mitigate, and minimize the use of these patterns. (32 pages, 11 figures. ACM Conference on Computer-Supported Cooperative Work and Social Computing, CSCW 2019.)
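    To make the idea of automatically flagging dark patterns concrete, here is a deliberately simplified sketch: a keyword-pattern detector for two of the categories the abstract mentions (urgency and scarcity messaging). The phrase list and category names are illustrative assumptions; the actual study combined large-scale crawling, text clustering, and expert labeling rather than a fixed regex list.

```python
import re

# Toy phrase patterns for two dark-pattern categories (urgency, scarcity).
# These keywords are illustrative only, not the paper's taxonomy or method.
PATTERNS = {
    "urgency": re.compile(
        r"\b(hurry|offer ends|only \d+ (hours?|minutes?) left)\b", re.IGNORECASE
    ),
    "scarcity": re.compile(
        r"\b(only \d+ left in stock|selling fast|almost gone)\b", re.IGNORECASE
    ),
}

def flag_dark_patterns(page_text: str) -> list[str]:
    """Return the categories whose phrase patterns match the page text."""
    return [name for name, rx in PATTERNS.items() if rx.search(page_text)]
```

Run over crawled product-page text, such a detector yields candidate instances for expert review; a real pipeline would add crawling, text segmentation, and clustering on top.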

    Evaluating the End-User Experience of Private Browsing Mode

    Nowadays, all major web browsers have a private browsing mode. However, the mode's benefits and limitations are not well understood. Through the use of survey studies, prior work has found that most users are either unaware of private browsing or do not use it. Further, those who do use private browsing generally have misconceptions about what protection it provides. However, prior work has not investigated why users misunderstand the benefits and limitations of private browsing. In this work, we do so by designing and conducting a three-part study: (1) an analytical approach combining cognitive walkthrough and heuristic evaluation to inspect the user interface of private mode in different browsers; (2) a qualitative, interview-based study to explore users' mental models of private browsing and its security goals; (3) a participatory design study to investigate why existing browser disclosures, the in-browser explanations of private browsing mode, do not communicate the security goals of private browsing to users. Participants critiqued the browser disclosures of three web browsers: Brave, Firefox, and Google Chrome, and then designed new ones. We find that the user interface of private mode in different web browsers violates several well-established design guidelines and heuristics. Further, most participants had incorrect mental models of private browsing, influencing their understanding and usage of private mode. Additionally, we find that existing browser disclosures are not only vague, but also misleading. None of the three studied browser disclosures communicates or explains the primary security goal of private browsing. Drawing from the results of our user study, we extract a set of design recommendations that we encourage browser designers to validate, in order to design more effective and informative browser disclosures related to private mode.

    Identifying and measuring manipulative user interfaces at scale on the web

    Powerful and otherwise trustworthy actors on the web gain from manipulating users and pushing them into making sub-optimal decisions. While prior work has documented examples of such manipulative practices, we lack a systematic understanding of their characteristics and their prevalence on the web. Building up this knowledge can lead to solutions that protect individuals and society from their harms. In this dissertation, I focus on manipulative practices that manifest in the user interface. I first describe the attributes of manipulative user interfaces. I show that these interfaces engineer users' choice architectures by either modifying the information available to users or by modifying the set of choices available to users, eliminating and suppressing choices that disadvantage the manipulator. I then present the core contribution of this dissertation: automated methods that combine web automation and machine learning to identify manipulative interfaces at scale on the web. Using these methods, I conduct three measurements. First, I examine the extent to which content creators fail to disclose their endorsements on social media, misleading users into believing they are viewing unbiased, non-advertising content. Collecting and analyzing a dataset of 500K YouTube videos and 2 million Pinterest pins, I discover that ~90% of these endorsements go undisclosed. Second, I quantify the prevalence of dark patterns on shopping websites. Analyzing data I collected from 11K shopping websites, I discover 1,818 dark patterns on 1,254 websites that mislead, deceive, or coerce users into making more purchases or disclosing more information than they would otherwise. Finally, I quantify the prevalence of dark patterns and clickbait in political emails. Collecting and analyzing a dataset of over 100K emails from U.S. political campaigns and organizations in the 2020 election cycle, I find that ~40% of emails sent by the median campaign/organization contain these manipulative interfaces. I conclude with how the lessons learned from these measurements can be used to build technical defenses and to lay out policy recommendations to mitigate the spread of these interfaces.
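    The "~40% of emails sent by the median campaign/organization" figure is a per-sender statistic: classify each email, compute each sender's share of flagged emails, then take the median across senders. A minimal sketch under that reading (toy data; the email classifier itself, which the dissertation builds with machine learning, is assumed here as a boolean label):

```python
from collections import defaultdict
from statistics import median

def share_manipulative_per_sender(emails):
    """emails: iterable of (sender, is_manipulative) pairs.
    Returns each sender's fraction of flagged emails."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for sender, is_manip in emails:
        totals[sender] += 1
        flagged[sender] += int(is_manip)
    return {s: flagged[s] / totals[s] for s in totals}

def median_sender_share(emails):
    """Median, across senders, of the per-sender flagged share."""
    return median(share_manipulative_per_sender(emails).values())
```

Taking the median per sender, rather than a flat fraction over all emails, keeps a few high-volume senders from dominating the headline number.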